18 research outputs found

    Building an Aerial-Ground Robotics System for Precision Farming: An Adaptable Solution

    The application of autonomous robots in agriculture is gaining popularity thanks to the high impact it may have on food security, sustainability, resource-use efficiency, reduction of chemical treatments, and the optimization of human effort and yield. With this vision, the Flourish research project aimed to develop an adaptable robotic solution for precision farming that combines the aerial survey capabilities of small autonomous unmanned aerial vehicles (UAVs) with targeted intervention performed by multi-purpose unmanned ground vehicles (UGVs). This paper presents an overview of the scientific and technological advances and outcomes obtained in the project. We introduce the multi-spectral perception algorithms and the aerial and ground-based systems developed for monitoring crop density, weed pressure, and crop nitrogen nutrition status, and for accurately classifying and locating weeds. We then introduce the navigation and mapping systems tailored to our robots in the agricultural environment, as well as the modules for collaborative mapping. We finally present the ground intervention hardware, software solutions, and interfaces we implemented and tested in different field conditions and with different crops. We describe a real use case in which a UAV collaborates with a UGV to monitor the field and to perform selective spraying without human intervention. (Published in IEEE Robotics & Automation Magazine, vol. 28, no. 3, pp. 29-49, Sept. 2021.)
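
    As a rough illustration of the use case described above, the Python sketch below shows how an aerial weed map could be handed off to a ground robot for selective spraying. All names here (WeedDetection, survey_field, navigate_to, spray_at, the confidence threshold) are hypothetical placeholders, not the Flourish project's actual interfaces.

        # Minimal sketch of a UAV-to-UGV selective-spraying hand-off.
        # Class and function names are hypothetical, not the Flourish interfaces.
        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class WeedDetection:
            position: Tuple[float, float]   # field coordinates in metres
            confidence: float               # classifier confidence in [0, 1]

        def plan_spray_mission(detections: List[WeedDetection],
                               min_confidence: float = 0.8) -> List[Tuple[float, float]]:
            """Keep confident detections and order them into a simple row-wise route."""
            targets = [d.position for d in detections if d.confidence >= min_confidence]
            return sorted(targets, key=lambda p: (round(p[1], 1), p[0]))

        def run_use_case(uav, ugv) -> None:
            """UAV surveys the field, then the UGV sprays each confirmed weed."""
            detections = uav.survey_field()      # aerial multi-spectral survey (assumed API)
            for target in plan_spray_mission(detections):
                ugv.navigate_to(target)          # ground navigation to the weed (assumed API)
                ugv.spray_at(target)             # targeted intervention (assumed API)

        # Example with two confident detections in the same crop row:
        demo = [WeedDetection((3.0, 2.0), 0.95), WeedDetection((1.0, 2.0), 0.90)]
        print(plan_spray_mission(demo))          # [(1.0, 2.0), (3.0, 2.0)]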

    Comparing Metrics for Evaluating 3D Map Quality in Natural Environments

    In this study, we focus on the challenge of measuring 3D-map quality in natural environments. Specifically, we consider scenarios where the map is built from a robot's 3D-Lidar point cloud observations, with potential uncertainty in the robot localization. In a natural environment, such as a park or a forest, which is unstructured by nature, another difficulty arises: the data becomes extremely sparse. As a result, measuring the map quality becomes even more challenging. This study compares the effectiveness of various metrics for measuring 3D-map quality. First, we evaluate these metrics in a controlled experimental setup, where the reconstructed map is created by progressively degrading the reference map using different degradation models. Second, we compare their ability to measure 3D-map quality at a local level, across various simulated environments ranging from structured to unstructured. Finally, we conduct a qualitative comparison to demonstrate the robustness of certain metrics to noise in the robot localization, both in simulation and in a real-world experiment. Ultimately, we synthesize the properties of these metrics and provide practical recommendations for their selection.
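
    The controlled setup described above can be illustrated with a short Python sketch: a reference point cloud is progressively degraded (random dropout plus Gaussian jitter are assumed here as two simple degradation models) and a candidate quality metric, here a plain Chamfer distance, is checked against the degradation level. This is an assumed illustration of the protocol, not the metrics or code used in the study.

        # Assumed illustration of the degradation protocol, not the study's code.
        import numpy as np
        from scipy.spatial import cKDTree

        def chamfer(reference: np.ndarray, reconstructed: np.ndarray) -> float:
            """Symmetric mean nearest-neighbour distance between two (N, 3) clouds."""
            d_ab, _ = cKDTree(reconstructed).query(reference)
            d_ba, _ = cKDTree(reference).query(reconstructed)
            return 0.5 * (d_ab.mean() + d_ba.mean())

        def degrade(cloud: np.ndarray, sigma: float, drop: float,
                    rng: np.random.Generator) -> np.ndarray:
            """Two simple degradation models: random point dropout plus Gaussian jitter."""
            keep = rng.random(len(cloud)) > drop
            return cloud[keep] + rng.normal(0.0, sigma, size=(keep.sum(), 3))

        rng = np.random.default_rng(0)
        reference = rng.uniform(0.0, 10.0, size=(5000, 3))   # stand-in for a reference map
        for level in (0.0, 0.02, 0.05, 0.1, 0.2):
            degraded = degrade(reference, sigma=level, drop=level, rng=rng)
            print(f"degradation {level:.2f} -> metric {chamfer(reference, degraded):.4f}")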

    Design and Implementation of Computer Vision based In-Row Weeding System

    Autonomous robotic weeding systems in precision farming have demonstrated their potential to alleviate the current dependency on herbicides and pesticides by introducing selective spraying or mechanical weed removal modules, thus reducing environmental pollution and improving sustainability. However, most previous works require a fast weed detection system to achieve real-time treatment. In this paper, a novel computer-vision-based weeding control system is presented, in which a non-overlapping multi-camera system is introduced to compensate for indeterminate classification delays, thus allowing for more complicated and advanced detection algorithms, e.g. deep-learning-based methods. Suitable tracking and control strategies are developed to achieve accurate and robust in-row weed treatment, and the performance of the proposed system is evaluated in different terrain conditions in the presence of various delays.
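
    A minimal Python sketch of the delay-compensation idea follows, assuming a single longitudinal axis, a fixed camera-to-tool offset, and odometry-based tracking; the names and numbers are illustrative and not taken from the paper.

        # 1-D sketch of delay compensation: the weed is tracked by odometry from the
        # moment the forward camera sees it, so a late classification result can still
        # trigger treatment at the right ground position. Values are illustrative.
        from dataclasses import dataclass

        CAMERA_TO_TOOL = 0.60   # assumed longitudinal camera-to-tool offset (m)

        @dataclass
        class TrackedWeed:
            odom_at_detection: float   # robot travel (m) when the image was taken
            classified: bool = False   # set True once the (delayed) classifier answers

        def tool_frame_position(weed: TrackedWeed, odom_now: float) -> float:
            """Signed distance of the weed ahead of the tool, from odometry alone."""
            travelled = odom_now - weed.odom_at_detection
            return CAMERA_TO_TOOL - travelled

        def should_treat(weed: TrackedWeed, odom_now: float, tol: float = 0.02) -> bool:
            """Trigger treatment once a classified weed lies within tol of the tool."""
            return weed.classified and abs(tool_frame_position(weed, odom_now)) <= tol

        weed = TrackedWeed(odom_at_detection=10.00)   # detected at 10 m of travel
        weed.classified = True                        # result arrives some time later
        print(should_treat(weed, odom_now=10.50))     # False: still 10 cm ahead of the tool
        print(should_treat(weed, odom_now=10.60))     # True: weed is under the tool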

    Measuring 3D-reconstruction quality in probabilistic volumetric maps with the Wasserstein Distance

    In this study, we address the challenge of measuring 3D-reconstruction quality in large unstructured environments, when the map is built with uncertainty in the robot localization. The challenge lies in measuring the quality of a reconstruction against the ground truth when the data is extremely sparse and where traditional methods, such as surface distance metrics, fail. We propose a complete methodology to measure the quality of the reconstruction, at a local level, in both structured and unstructured environments. Building upon the fact that a common map representation in robotics is the probabilistic volumetric map, we propose, within this methodology, a novel metric that measures map quality directly from the voxels' occupancy likelihood: the Wasserstein Distance. Finally, we evaluate this Wasserstein Distance metric in simulation, under different levels of noise in the robot localization, and in a real-world experiment, demonstrating the robustness of our method.
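
    One plausible way to apply this idea, sketched in Python below, is to compare the occupancy likelihoods of co-registered voxel blocks from the reconstructed and ground-truth maps with SciPy's one-dimensional Wasserstein distance. The exact formulation in the paper may differ; the block size and noise model here are assumptions.

        # Illustrative sketch only; the paper's exact formulation may differ.
        import numpy as np
        from scipy.stats import wasserstein_distance

        def local_map_quality(reference_occ: np.ndarray,
                              reconstructed_occ: np.ndarray) -> float:
            """Wasserstein distance between the occupancy-likelihood distributions of
            two co-registered voxel blocks (values in [0, 1]); 0 means identical."""
            return wasserstein_distance(reference_occ.ravel(), reconstructed_occ.ravel())

        # Toy example: an 8x8x8 ground-truth block and a noisy reconstruction of it.
        rng = np.random.default_rng(1)
        reference = (rng.random((8, 8, 8)) > 0.7).astype(float)       # crisp occupancy
        noisy = np.clip(reference + rng.normal(0.0, 0.15, reference.shape), 0.0, 1.0)
        print(f"local quality (lower is better): {local_map_quality(reference, noisy):.4f}")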

    Next-Best-View selection from observation viewpoint statistics

    This paper discusses the problem of autonomously constructing a high-quality map of an unknown 3D environment using a 3D-Lidar. In this setting, how can we effectively integrate the quality of the 3D reconstruction into the selection of the Next-Best-View? We address the challenge of estimating the quality of the currently reconstructed map in order to guide the exploration policy, in the absence of ground truth, which is typically the case in exploration scenarios. Our key contribution is a method to build a prior on the quality of the reconstruction from the data itself. We not only prove that this quality depends on statistics of the observation viewpoints, but we also demonstrate that we can enhance the quality of the reconstruction by leveraging these statistics during exploration. To do so, we integrate them into Next-Best-View selection policies, in which the information gain is computed directly from these statistics. Finally, we demonstrate the robustness of our approach, even in challenging environments with noise in the robot localization, and we further validate it through a real-world experiment.
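
    The following Python sketch illustrates the general idea of scoring candidate views from observation viewpoint statistics: each voxel records the direction bins it has already been observed from, and a candidate's gain counts the new (voxel, direction) pairs it would add. This simplified gain is an assumption for illustration, not the paper's exact criterion.

        # Simplified, assumed gain definition; not the paper's exact criterion.
        import numpy as np

        def direction_bins(viewpoint: np.ndarray, voxel_centers: np.ndarray,
                           n_bins: int = 12) -> np.ndarray:
            """Quantise the horizontal viewing direction from a viewpoint to each voxel."""
            d = voxel_centers - viewpoint
            azimuth = np.arctan2(d[:, 1], d[:, 0])
            return ((azimuth + np.pi) / (2.0 * np.pi) * n_bins).astype(int) % n_bins

        def viewpoint_gain(viewpoint, voxel_centers, seen_bins, n_bins=12) -> int:
            """Count the (voxel, direction-bin) pairs this viewpoint would add."""
            new_bins = direction_bins(viewpoint, voxel_centers, n_bins)
            return sum(int(b not in seen) for b, seen in zip(new_bins, seen_bins))

        def next_best_view(candidates, voxel_centers, seen_bins):
            """Greedy NBV: pick the candidate with the highest viewpoint-statistics gain."""
            return max(candidates, key=lambda v: viewpoint_gain(v, voxel_centers, seen_bins))

        # Toy example: three voxels, all already observed from roughly the +x direction.
        voxels = np.array([[5.0, 0.0, 0.0], [5.0, 1.0, 0.0], [5.0, 2.0, 0.0]])
        seen = [{6}, {6}, {6}]
        candidates = [np.array([0.0, 0.0, 0.0]), np.array([5.0, 5.0, 0.0])]
        print(next_best_view(candidates, voxels, seen))   # [5. 5. 0.]: adds unseen directions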